57 research outputs found

    Functional Group Bridge for Simultaneous Regression and Support Estimation

    This article is motivated by studying multisensory effects on brain activities in intracranial electroencephalography (iEEG) experiments. Differential brain activities in response to multisensory stimulus presentations are zero in most regions and non-zero in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, with interest focused not only on estimating nonparametric functions but also on recovering their supports. We propose a weighted group bridge approach for simultaneous function estimation and support recovery in function-on-scalar mixed effect models, while accounting for heterogeneity present in functional data. We use B-splines to transform the sparsity of functions to a sparse vector counterpart of increasing dimension, and propose a fast non-convex optimization algorithm using a nested alternating direction method of multipliers (ADMM) for estimation. Large sample properties are established. In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the L2 norm and exhibit a phase transition phenomenon. For support estimation, we derive a convergence rate under the L∞ norm that leads to a sparsistency property under δ-sparsity, and provide a simple sufficient regularity condition under which a strict sparsistency property is established. An adjusted extended Bayesian information criterion is proposed for parameter tuning. The developed method is illustrated through simulation and an application to a novel iEEG dataset to study multisensory integration. We integrate the proposed method into RAVE, an R package that is gaining popularity in the iEEG community.
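    The core construction, expanding each coefficient function in a B-spline basis so that sparsity of the function becomes sparsity of grouped basis coefficients shrunk by a group bridge penalty, can be sketched as follows. This is a minimal illustration of the idea only, not the paper's weighted, nested-ADMM implementation: the dimensions and data are hypothetical, the loss is plain least squares, and the grouping is simplified to one group per coefficient function, whereas the paper ties groups to local regions of the domain to recover supports.

```python
import numpy as np
from scipy.interpolate import BSpline

# Hypothetical sizes: n curves observed on a grid of T time points,
# p scalar covariates, K cubic B-spline basis functions per coefficient function.
n, T, p, K, degree = 50, 100, 3, 20, 3
t = np.linspace(0.0, 1.0, T)

# Clamped knot vector giving exactly K basis functions, and the T x K basis matrix.
knots = np.concatenate([np.repeat(0.0, degree),
                        np.linspace(0.0, 1.0, K - degree + 1),
                        np.repeat(1.0, degree)])
basis = BSpline.design_matrix(t, knots, degree).toarray()   # shape (T, K)

rng = np.random.default_rng(0)
X = rng.normal(size=(n, p))                                 # scalar covariates
B = rng.normal(size=(p, K))                                 # spline coefficients, one row per covariate
Y = X @ B @ basis.T + 0.1 * rng.normal(size=(n, T))         # function-on-scalar responses

def group_bridge_penalty(B, gamma=0.5, weights=None):
    """Weighted group bridge: sum_j w_j * (sum_k |b_jk|)^gamma with 0 < gamma < 1,
    which shrinks whole groups of basis coefficients to exact zero."""
    w = np.ones(B.shape[0]) if weights is None else weights
    return float(np.sum(w * np.sum(np.abs(B), axis=1) ** gamma))

def penalized_loss(B, lam=1.0):
    resid = Y - X @ B @ basis.T
    return 0.5 * np.sum(resid ** 2) / n + lam * group_bridge_penalty(B)

print(penalized_loss(B))
```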

    Do Eye Movements During Shape Discrimination Reveal an Underlying Geometric Structure?

    Using a psychophysical approach coupled with eye-tracking measures, we varied the length and width of shape stimuli to determine the objective parameters that corresponded to subjective square/rectangle judgments. Participants viewed a two-dimensional shape stimulus and made a two-alternative forced choice as to whether it was a square or a rectangle. Participants’ gaze was tracked throughout the task to explore directed visual attention to the vertical and horizontal axes of space. Behavioral results provide threshold values for two-dimensional square/rectangle perception, and eye-tracking data indicated that participants directed attention to the major and minor principal axes. Results are consistent with the use of the major and minor principal axes of space for shape perception and may have theoretical and empirical implications for orientation via geometric cues.
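    A perceptual threshold of the kind reported here is commonly estimated by fitting a psychometric function to the two-alternative forced-choice data. The sketch below fits a logistic function to hypothetical response proportions as a function of stimulus aspect ratio; the aspect ratios, proportions, and starting values are illustrative and are not the study's data or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: stimulus aspect ratio (length/width) and the proportion
# of trials on which participants judged the stimulus to be a "rectangle".
aspect_ratio = np.array([1.00, 1.02, 1.04, 1.06, 1.08, 1.10, 1.15, 1.20])
p_rectangle  = np.array([0.05, 0.10, 0.22, 0.45, 0.63, 0.78, 0.92, 0.98])

def psychometric(x, alpha, beta):
    """Logistic psychometric function: alpha = 50% threshold, beta = slope."""
    return 1.0 / (1.0 + np.exp(-beta * (x - alpha)))

(alpha_hat, beta_hat), _ = curve_fit(psychometric, aspect_ratio, p_rectangle, p0=[1.06, 50.0])
print(f"estimated square/rectangle threshold: aspect ratio = {alpha_hat:.3f}")
```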

    A Comparison of Neuroelectrophysiology Databases

    As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: Data Archive for the BRAIN Initiative (DABI), Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. These archives provide researchers with tools to store, share, and reanalyze neurophysiology data, though the means of accomplishing these objectives differ. The Brain Imaging Data Structure (BIDS) and Neurodata Without Borders (NWB) standards are utilized by these archives to make data more accessible to researchers by implementing a common standard. While many tools are available to reanalyze data on and off the archives' platforms, this article features the Reproducible Analysis and Visualization of Intracranial EEG (RAVE) toolkit, developed specifically for the analysis of intracranial signal data and integrated with the discussed standards and archives. Neuroelectrophysiology data archives improve how researchers can aggregate, analyze, distribute, and parse these data, which can lead to more significant findings in neuroscience research.
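    As an illustration of what a common standard buys in practice, the sketch below opens an NWB file with the pynwb reference library and lists its acquisition objects. The file name is hypothetical; any NWB-formatted recording obtained from DANDI, DABI, or OpenNeuro would be read the same way.

```python
from pynwb import NWBHDF5IO

# Hypothetical file name; substitute any NWB-formatted iEEG recording.
with NWBHDF5IO("sub-01_ses-01_ieeg.nwb", mode="r") as io:
    nwbfile = io.read()
    print(nwbfile.session_description)
    # Raw acquisitions (e.g., ElectricalSeries) live in a dict-like container.
    for name, obj in nwbfile.acquisition.items():
        print(name, type(obj).__name__)
```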

    Alveolar hypoxia, alveolar macrophages, and systemic inflammation

    Diseases featuring abnormally low alveolar PO2 are frequently accompanied by systemic effects. The common presence of an underlying inflammatory component suggests that inflammation may contribute to the pathogenesis of the systemic effects of alveolar hypoxia. While the role of alveolar macrophages in the immune and defense functions of the lung has long been known, recent evidence indicates that activation of alveolar macrophages causes inflammatory disturbances in the systemic microcirculation. The purpose of this review is to describe observations in experimental animals showing that alveolar macrophages initiate a systemic inflammatory response to alveolar hypoxia. Evidence obtained in intact animals and in primary cell cultures indicates that alveolar macrophages activated by hypoxia release a mediator(s) into the circulation. This mediator activates perivascular mast cells and initiates a widespread systemic inflammation. The inflammatory cascade includes activation of the local renin-angiotensin system and results in increased leukocyte-endothelial interactions in post-capillary venules, increased microvascular levels of reactive O2 species, and extravasation of albumin. Given the known extrapulmonary responses elicited by activation of alveolar macrophages, this novel phenomenon could contribute to some of the systemic effects of conditions featuring low alveolar PO2.

    A Causal Inference Model Explains Perception of the McGurk Effect and Other Incongruent Audiovisual Speech.

    Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba)? We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
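    The central computation in causal inference is the posterior probability that the auditory and visual signals share a common cause, which determines whether they should be integrated. The sketch below implements the standard one-dimensional version of this computation for two noisy Gaussian cues with a Gaussian prior; it is a generic illustration of the principle from the cue-combination literature, not the CIMS model's representational-space implementation, and all parameter values are hypothetical.

```python
import numpy as np

def posterior_common_cause(x_a, x_v, sigma_a, sigma_v, sigma_p, p_common=0.5, mu_p=0.0):
    """Posterior probability that auditory and visual measurements share a common cause (C = 1).

    Generic 1-D Bayesian causal inference for two noisy cues with a Gaussian prior;
    a sketch of the principle, not the CIMS implementation.
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the pair of measurements under a single shared source.
    denom1 = va * vv + va * vp + vv * vp
    like_c1 = np.exp(-0.5 * ((x_a - x_v)**2 * vp
                             + (x_a - mu_p)**2 * vv
                             + (x_v - mu_p)**2 * va) / denom1) / (2 * np.pi * np.sqrt(denom1))

    # Likelihood under two independent sources.
    like_c2 = (np.exp(-0.5 * (x_a - mu_p)**2 / (va + vp)) / np.sqrt(2 * np.pi * (va + vp))
               * np.exp(-0.5 * (x_v - mu_p)**2 / (vv + vp)) / np.sqrt(2 * np.pi * (vv + vp)))

    return like_c1 * p_common / (like_c1 * p_common + like_c2 * (1 - p_common))

# Similar cues -> high probability of a common cause (integrate);
# discrepant cues -> low probability (segregate).
print(posterior_common_cause(0.1, 0.2, sigma_a=1.0, sigma_v=1.0, sigma_p=5.0))
print(posterior_common_cause(-3.0, 3.0, sigma_a=1.0, sigma_v=1.0, sigma_p=5.0))
```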

    Published estimates of group differences in multisensory integration are inflated.

    A common measure of multisensory integration is the McGurk effect, an illusion in which incongruent auditory and visual speech are integrated to produce an entirely different percept. Published studies report that participants who differ in age, gender, culture, native language, or traits related to neurological or psychiatric disorders also differ in their susceptibility to the McGurk effect. These group-level differences are used as evidence for fundamental alterations in sensory processing between populations. Using empirical data and statistical simulations tested under a range of conditions, we show that published estimates of group differences in the McGurk effect are inflated when only statistically significant (p < 0.05) results are published. With a sample size typical of published studies, a group difference of 10% would be reported as 31%. As a consequence of this inflation, follow-up studies often fail to replicate published reports of large between-group differences. Inaccurate estimates of effect sizes and replication failures are especially problematic in studies of clinical populations involving expensive and time-consuming interventions, such as training paradigms to improve sensory processing. Reducing effect size inflation and increasing replicability requires increasing the number of participants by an order of magnitude compared with current practice.
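    The inflation mechanism described here can be demonstrated with a short simulation: generate many small two-group studies with a fixed true difference, keep only those reaching p < 0.05, and compare the average "published" difference with the truth. The sketch below does this with hypothetical sample sizes, trial counts, and baseline susceptibility; it illustrates the selection effect rather than reproducing the paper's simulations or exact figures.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
true_diff = 0.10      # true between-group difference in McGurk susceptibility (hypothetical)
n_per_group = 20      # participants per group, typical of small studies (hypothetical)
n_trials = 10         # McGurk trials per participant (hypothetical)
n_studies = 20_000

published = []
for _ in range(n_studies):
    # Each participant's susceptibility is a proportion of "fused" responses.
    g1 = rng.binomial(n_trials, 0.40, size=n_per_group) / n_trials
    g2 = rng.binomial(n_trials, 0.40 + true_diff, size=n_per_group) / n_trials
    t, p = ttest_ind(g1, g2)
    if p < 0.05:                      # only "significant" studies get published
        published.append(g2.mean() - g1.mean())

print(f"true group difference: {true_diff:.2f}")
print(f"mean difference among significant studies: {np.mean(published):.2f}")
```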

    The Temporal Dynamics of Cognitive Processing

    From our ability to attend to many stimuli occurring in rapid succession to the transformation of memories during a night of sleep, cognition occurs over widely varying time scales spanning milliseconds to days and beyond. Cognitive processing is often influenced by several behavioral variables as well as nonlinear interactions between multiple neural systems. This frequently produces unpredictable patterns of behavior and makes understanding the underlying temporal factors influencing cognition a fruitful area of hypothesis development and scientific inquiry. Across two reviews, a perspective, and twelve original research articles covering the domains of learning, memory, attention, cognitive control, and social decision making, this research topic sheds new light on the temporal dynamics of cognitive processing.

    Modeling of multisensory speech perception without causal inference.

    (A) There are two possible causal structures for a given audiovisual speech stimulus. If there is a common cause (C = 1), a single talker generates the auditory and visual speech. Alternatively, if there is not a common cause (C = 2), two separate talkers generate the auditory and visual speech. (B) We generate multisensory representations in a two-dimensional representational space. The prototypes of the syllables “ba,” “da,” and “ga” (location of text labels) are mapped into the representational space with locations determined by pairwise confusability. The x-axis represents auditory features; the y-axis represents visual features. (C) Encoding the auditory “ba” + visual “ga” (AbaVga) McGurk stimulus. The unisensory components of the stimulus are encoded with noise that is independent across modalities. On three trials in which an identical AbaVga stimulus is presented (represented as 1, 2, 3) the encoded representations of the auditory and visual components differ because of sensory noise, although they are centered on the prototype (gray ellipses show 95% probability region across all presentations). Shapes of ellipses reflect reliability of each modality: for auditory “ba” (ellipse labeled A), the ellipse has its short axis along the auditory x-axis; visual “ga” (ellipse labeled V) has its short axis along the visual y-axis. (D) On each trial, the unisensory representations are integrated using Bayes’ rule to produce an integrated representation that is located between the unisensory components in representational space. Numbers show the actual location of the integrated unisensory representations from (C). Because of reliability weighting, the integrated representations are closer to “ga” along the visual y-axis, but closer to “ba” along the auditory x-axis (ellipse shows 95% probability region across all presentations). (E) Without causal inference (non-CIMS), the AV representation is the final representation. On most trials, the representation lies in the “da” region of representational space (numbers and 95% probability ellipse from (D)). (F) A linear decision rule is applied, resulting in a model prediction of exclusively “da” percepts across trials. (G) Behavioral data from 60 subjects reporting their percept of auditory “ba” + visual “ga”. Across trials, subjects reported the “ba” percept for 57% of trials and “da” for 40% of trials. (H) Encoding the auditory “ga” + visual “ba” (AgaVba) incongruent non-McGurk stimulus. The unisensory components are encoded with modality-specific noise; the auditory “ga” ellipse has its short axis along the auditory axis, the visual “ba” ellipse has its short axis along the visual axis. (I) Across many trials, the integrated representation (AV) is closer to “ga” along the auditory x-axis, but closer to “ba” along the visual y-axis. (J) Over many trials, the integrated representation is found most often in the “da” region of perceptual space. (K) Across trials, the non-CIMS model predicts “da” for the non-McGurk stimulus. (L) Behavioral data from 60 subjects reporting their perception of AgaVba. Subjects reported “ga” on 96% of trials.
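    Panels C and D describe the standard precision-weighted (Bayes' rule) fusion of two Gaussian cues: because each modality's noise ellipse is narrow along its own axis, the fused representation sits near the auditory cue on the auditory axis and near the visual cue on the visual axis. The sketch below shows that fusion step in a two-dimensional space with illustrative numbers; it is a generic sketch of reliability-weighted integration, not the authors' code, and the prototype locations and covariances are made up.

```python
import numpy as np

def fuse(mu_a, cov_a, mu_v, cov_v):
    """Posterior mean and covariance of two Gaussian cues assumed to share a cause
    (precision-weighted average, i.e., Bayes' rule for Gaussians)."""
    prec_a, prec_v = np.linalg.inv(cov_a), np.linalg.inv(cov_v)
    cov_av = np.linalg.inv(prec_a + prec_v)
    mu_av = cov_av @ (prec_a @ mu_a + prec_v @ mu_v)
    return mu_av, cov_av

mu_a  = np.array([0.0, 0.0])     # auditory "ba" prototype (x = auditory axis), illustrative
mu_v  = np.array([1.0, 1.0])     # visual "ga" prototype (y = visual axis), illustrative
cov_a = np.diag([0.1, 1.0])      # auditory cue: reliable along x, vague along y
cov_v = np.diag([1.0, 0.1])      # visual cue: reliable along y, vague along x

mu_av, cov_av = fuse(mu_a, cov_a, mu_v, cov_v)
print(mu_av)   # ~[0.09, 0.91]: near "ba" on the auditory axis, near "ga" on the visual axis
```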

    Generalizability of models tested with other audiovisual syllables.

    (A) Behavior for congruent syllables. Each row represents a different congruent audiovisual syllable (AbaVba, AdaVda, AgaVga). Subjects made a three-alternative forced choice (ba, ga, da). The colors within each row show how often subjects reported each choice when presented with each syllable (e.g. for AbaVba, they always reported “ba”). (B) Non-CIMS model predictions for congruent syllables. Rows show syllables, colors across columns within each row show how often the model predicted that percept (darker colors indicate higher percentages). (C) CIMS model predictions for congruent syllables. (D) Behavior for incongruent syllables. Each row represents a different incongruent audiovisual syllable. Subjects made a three-alternative forced choice (ba, ga, da). The colors within each row show how often subjects reported each choice when presented with each syllable (e.g. for AbaVda, they more often reported “ba”, less often reported “da”, never reported “ga”). (E) Non-CIMS model predictions for incongruent syllables. Rows show syllables, colors across columns within each row show how often the model predicted that percept (darker colors indicate higher percentages). (F) CIMS model predictions for incongruent syllables.
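    The row-by-choice layout described in these panels is essentially a response-proportion matrix. The sketch below builds and displays such a matrix with made-up proportions, purely to show the format; it is not the study's data or plotting code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Made-up response proportions: rows are audiovisual syllables, columns are
# the three forced-choice options, darker cells mean higher proportions.
syllables = ["AbaVba", "AdaVda", "AgaVga"]
choices = ["ba", "da", "ga"]
proportions = np.array([
    [1.00, 0.00, 0.00],   # e.g. AbaVba reported as "ba" on every trial
    [0.00, 0.95, 0.05],
    [0.05, 0.10, 0.85],
])

fig, ax = plt.subplots()
ax.imshow(proportions, cmap="Greys", vmin=0.0, vmax=1.0)
ax.set_xticks(range(len(choices)))
ax.set_xticklabels(choices)
ax.set_yticks(range(len(syllables)))
ax.set_yticklabels(syllables)
ax.set_xlabel("reported percept")
plt.show()
```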